6 research outputs found

    Target recognition for synthetic aperture radar imagery based on convolutional neural network feature fusion

    Driven by the great success of deep convolutional neural networks (CNNs), which now underpin many computer vision applications, we extend the usability of visual-domain CNNs into the synthetic aperture radar (SAR) data domain without employing transfer learning. Our SAR automatic target recognition (ATR) architecture efficiently extends the pretrained Visual Geometry Group (VGG) CNN from the visual domain into the X-band SAR data domain by clustering its neuron layers, bridging the visual-to-SAR modality gap through fusion of the features extracted from the hidden layers, and employing a local feature matching scheme. Trials on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset under various setups and nuisances demonstrate highly appealing ATR performance, achieving 100% and 99.79% accuracy in the 3-class and 10-class ATR problems, respectively. We also confirm the validity, robustness, and conceptual coherence of the proposed method by extending it to several state-of-the-art CNNs and commonly used local feature similarity/matching metrics.
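    The sketch below illustrates the general shape of the idea: tapping hidden-layer activations of an RGB-pretrained VGG CNN on a SAR chip and fusing them into one descriptor. The chosen tap layers, global average pooling, concatenation as the fusion step, and cosine matching are assumptions for illustration, not the paper's exact layer clustering or matching scheme.

```python
# A minimal sketch, assuming VGG-16 tap layers, global average pooling, and
# concatenation as the fusion step (hypothetical choices for illustration).
import torch
import torch.nn.functional as F
import torchvision.models as models

vgg = models.vgg16(weights="IMAGENET1K_V1").features.eval()
TAP_LAYERS = {4, 9, 16, 23, 30}   # max-pool indices ending the five VGG-16 conv blocks

def fused_descriptor(sar_chip: torch.Tensor) -> torch.Tensor:
    """sar_chip: (1, 1, H, W) SAR magnitude chip, replicated to 3 channels for the RGB-trained CNN."""
    x = sar_chip.repeat(1, 3, 1, 1)
    feats = []
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in TAP_LAYERS:
                v = F.adaptive_avg_pool2d(x, 1).flatten(1)   # pool each tapped hidden layer
                feats.append(F.normalize(v, dim=1))          # L2-normalise before fusion
    return torch.cat(feats, dim=1)                           # fused feature vector

def match_score(query: torch.Tensor, template: torch.Tensor) -> float:
    """Cosine similarity stands in here for the local feature matching scheme."""
    return F.cosine_similarity(fused_descriptor(query), fused_descriptor(template)).item()
```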

    Histogram of distances for local surface description

    3D object recognition has proven superior to its 2D counterpart across numerous implementations, making it an active research topic. Local-feature-based proposals in particular, although quite accurate, are limited by the stability of the local reference frame or axis (LRF/A) on which their descriptors are defined. Moreover, estimating the LRF for each local patch demands extra processing time. We propose a 3D descriptor that removes the need for an LRF/A, dramatically reducing the processing time required. It also achieves robustness to high levels of noise and non-uniform subsampling. Our approach, the Histogram of Distances, is based on multiple L2-norm metrics of local patches, providing a simple and fast-to-compute descriptor suitable for time-critical applications. Evaluation on popular point clouds of both high and low quality showed promising performance.
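    As a rough illustration of why an LRF-free, distance-based descriptor is cheap and pose-invariant, the sketch below histograms L2 distances between randomly sampled point pairs inside a local patch. The pair count, bin count, and normalisation are assumptions rather than the published HoD specification.

```python
# A minimal sketch of a distance-histogram local descriptor (parameters are assumptions).
import numpy as np

def histogram_of_distances(patch: np.ndarray, n_pairs: int = 2000,
                           n_bins: int = 32, seed: int = 0) -> np.ndarray:
    """patch: (N, 3) points in the local support of a keypoint."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(patch), n_pairs)
    j = rng.integers(0, len(patch), n_pairs)
    d = np.linalg.norm(patch[i] - patch[j], axis=1)   # L2 point-pair distances, no LRF needed
    d /= d.max() + 1e-12                              # normalise by the support extent
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()                          # descriptor as a normalised histogram

# Usage on a synthetic spherical patch; pairwise distances make the result rotation-invariant.
pts = np.random.default_rng(1).normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
descriptor = histogram_of_distances(pts)
```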

    3D automatic target recognition for missile platforms

    The quest for military Automatic Target Recognition (ATR) procedures arises from the demand to reduce collateral damage and fratricide. Although missiles with two-dimensional ATR capabilities do exist, future Light Detection and Ranging (LIDAR) missiles with three-dimensional (3D) ATR abilities should significantly improve a missile's effectiveness in complex battlefields, because 3D ATR can encode the target's underlying structure and thus reinforce target recognition. However, current military-grade 3D ATR and militarily applied computer vision algorithms for object recognition do not offer optimal solutions for an ATR-capable, LIDAR-based missile, primarily because of the computational and memory (storage) constraints that missiles impose. This research therefore first introduces a 3D descriptor taxonomy for the Local and the Global descriptor domains, capable of capturing the processing cost of each potential option. Through these taxonomies, the optimal missile-oriented descriptor in each domain is identified, which in turn pinpoints the research route of this thesis. In terms of 3D descriptors suitable for missiles, the contribution of this thesis is one 3D Global descriptor and four 3D Local descriptors, namely the SURF Projection Recognition (SPR), the Histogram of Distances (HoD), its processing-efficient variant (HoD-S), and its binary variant (B-HoD). These are challenged against current state-of-the-art 3D descriptors on standard commercial datasets, as well as on highly credible simulated air-to-ground missile engagement scenarios that consider various platform parameters and nuisances, including simulated scale change and atmospheric disturbances. The results obtained over the different datasets showed an outstanding computational improvement, on average 19 times faster than state-of-the-art techniques in the literature, while maintaining, and on some occasions improving, the detection rate, with a minimum of 90% of targets correctly classified.

    SAR automatic target recognition based on convolutional neural networks

    We propose a multi-modal, multi-discipline strategy for Automatic Target Recognition (ATR) on Synthetic Aperture Radar (SAR) imagery. Our architecture relies on a Convolutional Neural Network pre-trained in the RGB domain that is innovatively applied to SAR imagery and combined with multiclass Support Vector Machine classification. The multi-modal aspect of our architecture reinforces its generalisation capabilities, while the multi-discipline aspect bridges the modality gap. Even though our technique is trained at a single depression angle of 17°, the average performance on the MSTAR database over a 10-class target classification problem at 15°, 30° and 45° depression is 97.8%. This multi-target, multi-depression ATR capability has not yet been reported in the MSTAR literature.
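    A minimal sketch of the pipeline shape described above: an ImageNet-pretrained CNN used as a fixed feature extractor feeding a multiclass SVM. The penultimate-layer features, the RBF kernel, and the placeholder MSTAR variable names are assumptions for illustration, not the paper's exact configuration.

```python
# A minimal sketch (feature layer, kernel, and data loading are assumptions).
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.vgg16(weights="IMAGENET1K_V1")
backbone.classifier = backbone.classifier[:-1]       # keep the 4096-D penultimate features
backbone.eval()

def cnn_features(chips: torch.Tensor) -> np.ndarray:
    """chips: (B, 1, 224, 224) SAR magnitude chips, replicated to 3 channels."""
    with torch.no_grad():
        return backbone(chips.repeat(1, 3, 1, 1)).numpy()

svm = SVC(kernel="rbf", C=10.0, decision_function_shape="ovr")   # multiclass SVM classifier

# Hypothetical MSTAR-style split: train at 17 deg depression, test at another depression.
# X_train17, y_train17, X_test30, y_test30 would be loaded from the MSTAR chips.
# svm.fit(cnn_features(X_train17), y_train17)
# accuracy = svm.score(cnn_features(X_test30), y_test30)
```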